Designing multilayer perceptrons using a Guided Saw-tooth Evolutionary Programming Algorithm
Authors
Abstract
In this paper, a diversity generating mechanism is proposed for an Evolutionary Programming (EP) algorithm that determines the basic structure of Multilayer Perceptron classifiers and simultaneously estimates the coefficients of the models. We apply a modified version of a saw-tooth diversity enhancement mechanism recently presented for Genetic Algorithms, which uses a variable population size and periodic partial reinitializations of the population in the form of a saw-tooth function. Our improvement on this standard scheme consists of guiding the saw-tooth reinitializations by the variance of the best individuals in the population: a restart is performed when the difference in variance between two consecutive generations falls below a percentage of the previous variance. From the analysis of the results over ten benchmark datasets, it can be concluded that the computational cost of the EP algorithm with a constant population size is reduced by using the original saw-tooth scheme, and that the guided saw-tooth mechanism requires significantly less computation time than the original scheme. Finally, neither saw-tooth scheme reduces accuracy; in general, both obtain better or similar precision.
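The restart rule described above can be sketched in a few lines. The function and parameter names below (`guided_sawtooth_ep`, `threshold`, the elite-keeping refill policy, and the choice of the ten best individuals) are illustrative assumptions for a minimal sketch, not the paper's exact procedure.

```python
import random
import statistics

def variance_of_best(population, fitness, k=10):
    # Variance of the fitness values of the k best individuals.
    best = sorted(population, key=fitness, reverse=True)[:k]
    return statistics.pvariance([fitness(x) for x in best])

def guided_sawtooth_ep(init_pop, fitness, mutate, threshold=0.1, generations=100):
    # Guided saw-tooth EP loop (sketch): a partial reinitialization fires
    # when the best-individual variance stagnates between generations, i.e.
    # |var_t - var_{t-1}| < threshold * var_{t-1}.
    pop = list(init_pop)
    size = len(pop)
    prev_var = None
    for _ in range(generations):
        # Standard EP step: mutate every parent, keep the better half.
        offspring = [mutate(p) for p in pop]
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:size]
        var = variance_of_best(pop, fitness, k=min(10, size))
        if prev_var is not None and abs(var - prev_var) < threshold * prev_var:
            # Guided restart: keep the elite half, refill the rest by
            # mutating randomly chosen elite members.
            elite = pop[: size // 2]
            pop = elite + [mutate(random.choice(elite)) for _ in elite]
        prev_var = var
    return max(pop, key=fitness)
```

Because selection always retains the current best individual (restarts keep the elite half), the best fitness found never decreases across generations; the restart percentage and elite fraction are tunable.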
Similar papers
Saw-Tooth Algorithm Guided by the Variance of Best Individual Distributions for Designing Evolutionary Neural Networks
This paper proposes a diversity generating mechanism for an evolutionary algorithm that determines the basic structure of Multilayer Perceptron (MLP) classifiers and simultaneously estimates the coefficients of the models. We apply a modified version of a recently proposed diversity enhancement mechanism [1], which uses a variable population size and periodic partial reinitializations of the pop...
4. Multilayer perceptrons and back-propagation
Multilayer feed-forward networks, or multilayer perceptrons (MLPs), have one or several "hidden" layers of nodes. This implies that they have two or more layers of weights. The limitations of simple perceptrons do not apply to MLPs. In fact, as we will see later, a network with just one hidden layer can represent any Boolean function (including the XOR which is, as we saw, not linearly separab...
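The claim about one hidden layer can be made concrete with the standard hand-constructed XOR network; the weights and thresholds below are a textbook example, not taken from this snippet.

```python
def step(x):
    # Heaviside threshold activation.
    return 1 if x >= 0 else 0

def xor_mlp(x1, x2):
    # One hidden layer of two threshold units with hand-picked weights:
    h1 = step(x1 + x2 - 0.5)    # fires iff x1 OR x2
    h2 = step(x1 + x2 - 1.5)    # fires iff x1 AND x2
    return step(h1 - h2 - 0.5)  # h1 AND NOT h2  ==  x1 XOR x2
```

A single-layer perceptron cannot separate XOR's positive and negative cases with one hyperplane; the hidden layer supplies the two half-planes whose difference yields the XOR region.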
Comparing Hybrid Systems to Design and Optimize Artificial Neural Networks
In this paper we conduct a comparative study between hybrid methods to optimize multilayer perceptrons: a model that optimizes the architecture and initial weights of multilayer perceptrons; a parallel approach to optimize the architecture and initial weights of multilayer perceptrons; a method that searches for the parameters of the training algorithm, and an approach for cooperative co-evolut...
Co-evolving Multilayer Perceptrons Along Training Sets
When designing artificial neural networks (ANNs) it is important to optimise the network architecture and the learning coefficients of the training algorithm, as well as the time the network training phase takes, since this is the more time-consuming phase. In this paper an approach to cooperative co-evolutionary optimisation of multilayer perceptrons (MLP) is presented. The cooperative co-evoluti...
Learning Algorithms for Small Mobile Robots: Case Study on Maze Exploration
The emergence of intelligent behavior within a simple robotic agent is studied in this paper. Two control mechanisms for the agent are considered: a new direction of reinforcement learning called relational reinforcement learning, and a radial basis function neural network trained by an evolutionary algorithm. Relational reinforcement learning is a new interdisciplinary approach combining logical pro...
Journal title:
Soft Comput.
Volume 14, Issue -
Pages -
Publication date: 2010